
    Bilingual toddlers show increased attention capture by static faces compared to monolinguals

    Bilingual infants rely on facial information, such as lip patterns, differently than monolinguals do to differentiate their native languages. This may explain, at least in part, why young monolinguals and bilinguals show differences in social attention. For example, in the first year, bilinguals attend faster and more often to static faces (relative to non-faces) than monolinguals do (Mercure et al., 2018). However, the developmental trajectories of these differences are unknown. In this pre-registered study, data were collected from 15- to 18-month-old monolinguals (English) and bilinguals (English and another language) to test whether group differences in face-looking behaviour persist into the second year. We predicted that bilinguals would orient more rapidly and more often to static faces than monolinguals. Results supported the first but not the second hypothesis. This suggests that, even into the second year of life, toddlers’ rapid visual orientation to static social stimuli is sensitive to early language experience.

    A Generative Model of Speech Production in Broca’s and Wernicke’s Areas

    Speech production involves the generation of an auditory signal from the articulators and vocal tract. When the intended auditory signal does not match the produced sounds, subsequent articulatory commands can be adjusted to reduce the difference between the intended and produced sounds. This requires an internal model of the intended speech output that can be compared to the produced speech. The aim of this functional imaging study was to identify brain activation related to the internal model of speech production after activation related to vocalization, auditory feedback, and movement in the articulators had been controlled. There were four conditions: silent articulation of speech, non-speech mouth movements, finger tapping, and visual fixation. In the speech conditions, participants produced the mouth movements associated with the words “one” and “three.” We eliminated auditory feedback from the spoken output by instructing participants to articulate these words without producing any sound. The non-speech mouth movement conditions involved lip pursing and tongue protrusions to control for movement in the articulators. The main difference between our speech and non-speech mouth movement conditions is that prior experience producing speech sounds leads to the automatic and covert generation of auditory and phonological associations that may play a role in predicting auditory feedback. We found that, relative to non-speech mouth movements, silent speech activated Broca’s area in the left dorsal pars opercularis and Wernicke’s area in the left posterior superior temporal sulcus. We discuss these results in the context of a generative model of speech production and propose that Broca’s and Wernicke’s areas may be involved in predicting the speech output that follows articulation. These predictions could provide a mechanism by which rapid movement of the articulators is precisely matched to the intended speech outputs during future articulations.
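
    As a rough, toy illustration of the kind of internal model described above (not the authors' model; the linear "vocal tract", its gain, and the learning rule are all assumptions made only for illustration), the sketch below predicts the auditory consequence of each articulatory command and uses the mismatch between predicted and produced output to adjust subsequent commands:

        # Toy forward-model correction loop (illustrative only; not the authors' model).
        import numpy as np

        rng = np.random.default_rng(0)

        def produce(command):
            """Toy 'vocal tract': maps an articulatory command to a (noisy) auditory outcome."""
            true_gain = 1.3  # unknown plant property the speaker must compensate for
            return true_gain * command + rng.normal(scale=0.01, size=command.shape)

        def predict(command, est_gain):
            """Internal forward model: predicted auditory outcome of a command."""
            return est_gain * command

        target = np.array([0.5, 0.8])   # intended auditory output (arbitrary features)
        est_gain = 1.0                  # internal estimate of the vocal tract
        lr = 0.5

        for trial in range(20):
            command = target / est_gain        # choose a command expected to hit the target
            predicted = predict(command, est_gain)
            produced = produce(command)
            error = produced - predicted       # mismatch between predicted and produced sound
            # Update the internal model so future predictions (and commands) improve.
            est_gain += lr * np.mean(error * command) / np.mean(command ** 2)

        print("final output:", produce(target / est_gain), "target:", target)

    In this sketch the prediction error both flags that the produced sound missed the intended output and supplies the information needed to adjust the next articulation, which is the general mechanism the abstract relates to activity in Broca’s and Wernicke’s areas.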

    Investigating language lateralization during phonological and semantic fluency tasks using functional transcranial Doppler sonography

    Although there is consensus that the left hemisphere plays a critical role in language processing, some questions remain. Here we examine the influence of overt versus covert speech production on lateralization, the relationship between lateralization and behavioural measures of language performance, and the strength of lateralization across the subcomponents of language. The present study used functional transcranial Doppler sonography (fTCD) to investigate lateralization of phonological and semantic fluency during both overt and covert word generation in right-handed adults. The laterality index (LI) indicated left lateralization in all conditions, and there was no difference in the strength of the LI between overt and covert speech. This supports the validity of using overt speech in fTCD studies, which has the added benefit of providing a reliable measure of speech production.
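
    A note on the laterality index: one formulation widely used in fTCD work (the specific window and parameters below are illustrative, not necessarily those used in this study) averages the left-right difference in event-related blood-flow velocity over a short window centred on its peak:

        \[
        \Delta V(t) = V_{\mathrm{left}}(t) - V_{\mathrm{right}}(t), \qquad
        \mathrm{LI} = \frac{1}{t_{\mathrm{int}}}
          \int_{t_{\max} - t_{\mathrm{int}}/2}^{\, t_{\max} + t_{\mathrm{int}}/2} \Delta V(t)\, dt,
        \]

    where $V_{\mathrm{left}}$ and $V_{\mathrm{right}}$ are epoch-averaged, baseline-corrected blood-flow velocities in the two middle cerebral arteries, $t_{\max}$ is the time of the maximum absolute difference within the period of interest, and $t_{\mathrm{int}}$ is a short integration window (often around 2 s). Positive LI values indicate left lateralization.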

    Stimulus rate increases lateralisation in linguistic and non-linguistic tasks measured by functional transcranial Doppler sonography

    Studies to date that have used fTCD to examine language lateralisation have predominantly used word or sentence generation tasks. Here we sought to further assess the sensitivity of fTCD to language lateralisation by using a metalinguistic task which does not involve novel speech generation: rhyme judgement in response to written words. Line array judgement was included as a non-linguistic visuospatial task to examine the relative strength of left and right hemisphere lateralisation within the same individuals when output requirements of the tasks are matched. These externally paced tasks allowed us to manipulate the number of stimuli presented to participants and thus assess the influence of pace on the strength of lateralisation. In Experiment 1, 28 right-handed adults participated in rhyme and line array judgement tasks and showed reliable left and right lateralisation at the group level for each task, respectively. In Experiment 2 we increased the pace of the tasks, presenting more stimuli per trial. We measured laterality indices (LIs) from 18 participants who performed both linguistic and non-linguistic judgement tasks during the original 'slow' presentation rate (5 judgements per trial) and a fast presentation rate (10 judgements per trial). The increase in pace led to increased strength of lateralisation in both the rhyme and line conditions. Our results demonstrate for the first time that fTCD is sensitive to the left lateralised processes involved in metalinguistic judgements. Our data also suggest that changes in the strength of language lateralisation, as measured by fTCD, are not driven by articulatory demands alone. The current results suggest that at least one aspect of task difficulty, the pace of stimulus presentation, influences the strength of lateralisation during both linguistic and non-linguistic tasks.

    Evidence for shared conceptual representations for sign and speech

    Do different languages evoke different conceptual representations? If so, greatest divergence might be expected between languages that differ most in structure, such as sign and speech. Unlike speech bilinguals, hearing sign-speech bilinguals use languages conveyed in different modalities. We used functional magnetic resonance imaging and representational similarity analysis (RSA) to quantify the similarity of semantic representations elicited by the same concepts presented in spoken British English and British Sign Language in hearing, early sign-speech bilinguals. We found shared representations for semantic categories in left posterior middle and inferior temporal cortex. Despite shared category representations, the same spoken words and signs did not elicit similar neural patterns. Thus, contrary to previous univariate activation-based analyses of speech and sign perception, we show that semantic representations evoked by speech and sign are only partially shared. This demonstrates the unique perspective that sign languages and RSA provide in understanding how language influences conceptual representation.
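
    As a generic illustration of how RSA quantifies shared structure (a sketch only, not the authors' pipeline; the simulated data, region of interest, and Spearman comparison are assumptions), one builds a representational dissimilarity matrix (RDM) from the patterns evoked by each concept in each language and then correlates the two RDMs:

        # Generic RSA sketch (illustrative only; not the analysis pipeline used in the study).
        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        # Simulated patterns for the same concepts presented in each language,
        # shaped (n_concepts, n_voxels) as if extracted from a region of interest.
        rng = np.random.default_rng(0)
        n_concepts, n_voxels = 40, 200
        patterns_sign = rng.normal(size=(n_concepts, n_voxels))           # British Sign Language
        patterns_speech = patterns_sign + rng.normal(scale=2.0, size=(n_concepts, n_voxels))  # spoken English

        # RDMs: 1 - Pearson correlation between the patterns for every pair of concepts.
        rdm_sign = pdist(patterns_sign, metric="correlation")
        rdm_speech = pdist(patterns_speech, metric="correlation")

        # Shared representational structure: rank correlation between the two RDMs.
        rho, p = spearmanr(rdm_sign, rdm_speech)
        print(f"RDM similarity: Spearman rho = {rho:.3f}, p = {p:.3g}")

    A high rank correlation between the two RDMs indicates that the relative similarity structure of the concepts is shared across languages, even when the raw voxel patterns for individual words and signs differ.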

    Microstructural differences in the thalamus and thalamic radiations in the congenitally deaf

    There is evidence of both crossmodal and intermodal plasticity in the deaf brain. Here, we investigated whether sub-cortical plasticity, specifically of the thalamus, contributed to this reorganisation. We contrasted diffusion weighted magnetic resonance imaging data from 13 congenitally deaf and 13 hearing participants, all of whom had learnt British Sign Language after 10 years of age. Connectivity-based segmentation of the thalamus revealed changes to mean and radial diffusivity in occipital and frontal regions, which may be linked to enhanced peripheral visual acuity, and differences in how visual attention is deployed in the deaf group. Using probabilistic tractography, tracts were traced between the thalamus and its cortical targets, and microstructural measurements were extracted from these tracts. Group differences were found in microstructural measurements of occipital, frontal, somatosensory, motor and parietal thalamo-cortical tracts. Our findings suggest there is sub-cortical plasticity in the deaf brain, and that white matter alterations can be found throughout the deaf brain, rather than being restricted to, or focussed in, auditory cortex.
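
    For context, mean diffusivity (MD) and radial diffusivity (RD) are standard scalar summaries of the diffusion tensor fitted in each voxel. Writing its eigenvalues as $\lambda_1 \ge \lambda_2 \ge \lambda_3$, the usual definitions (standard definitions, not specific to this study) are

        \[
        \mathrm{MD} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}, \qquad
        \mathrm{RD} = \frac{\lambda_2 + \lambda_3}{2}, \qquad
        \mathrm{AD} = \lambda_1 ,
        \]

    where axial diffusivity (AD) is included for completeness. Group differences in these measures are commonly interpreted in terms of tissue microstructure (for example axon density or myelination), although that mapping is indirect.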

    Computerized speechreading training for deaf children: A randomised controlled trial

    Purpose: We developed a computerised speechreading training programme and evaluated it in a randomised controlled trial to determine (a) whether it is possible to train speechreading in deaf children and (b) whether speechreading training results in improvements in phonological and reading skills. Previous studies indicate a relationship between speechreading and reading skill and further suggest that this relationship may be mediated by improved phonological representations. This is important because many deaf children find learning to read very challenging. Method: Sixty-six deaf 5- to 7-year-olds were randomised into speechreading and maths training arms. Each training programme consisted of 10-minute sessions a day, 4 days a week, for 12 weeks. Children were assessed on a battery of language and literacy measures before training, immediately after training, and 3 months and 10 months after training. Results: We found no significant benefits for participants who completed the speechreading training, compared to those who completed the maths training, on the primary speechreading outcome measure. However, significantly greater gains were observed in the speechreading training group on one of the secondary measures of speechreading. There was also some evidence of beneficial effects of the speechreading training on phonological representations; however, these effects were weaker. No benefits were seen for word reading. Conclusions: Speechreading skill is trainable in deaf children. However, to support early reading, training may need to be longer or embedded in a broader literacy programme. Nevertheless, a training tool that can improve speechreading is likely to be of great interest to professionals working with deaf children.